For autonomous navigation and robotic applications, correctly perceiving the environment is crucial, and many sensing modalities exist for this purpose. In recent years, one modality that has come into use is in-air imaging sonar. It is an excellent choice for complex environments with harsh conditions such as dust or fog. However, as with most sensing modalities, perceiving the full environment around a mobile platform requires multiple such sensors to capture the full 360-degree field of view. Current processing algorithms are not capable of creating this data for multiple sensors at a sufficiently fast update rate. Furthermore, a flexible and robust framework is needed to easily integrate multiple imaging sonar sensors into any setup and to serve multiple application types with the data. In this paper we present a sensor network framework designed for this novel sensing modality. Furthermore, an implementation of the processing algorithm on a graphics processing unit is presented to reduce computation time, so that one or more imaging sonar sensors can be processed in real time at sufficiently high update rates.
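As an illustration of the kind of computation that maps well onto the GPU in this setting, the sketch below implements a conventional far-field delay-and-sum beamformer for one sonar frame. The array shapes, parameters and CuPy fallback are assumptions for illustration and do not reproduce the paper's actual processing pipeline or framework.

```python
# Illustrative sketch: shapes, parameters and the CuPy fallback are assumptions,
# not the paper's actual imaging-sonar processing pipeline.
import numpy as np

try:
    import cupy as xp          # GPU path if CuPy is available
except ImportError:
    xp = np                    # CPU fallback keeps the sketch runnable

C_AIR = 343.0                  # speed of sound in air [m/s]


def delay_and_sum(signals, mic_positions, directions, fs):
    """Echo energy per steering direction for one imaging-sonar frame.

    signals:       (n_mics, n_samples) raw microphone recordings
    mic_positions: (n_mics, 3) microphone coordinates [m]
    directions:    (n_dirs, 3) unit vectors of the steering directions
    fs:            sample rate [Hz]
    """
    signals = xp.asarray(signals)
    mic_positions = xp.asarray(mic_positions)
    directions = xp.asarray(directions)

    # Far-field delays: projection of each microphone position on each direction.
    delays = mic_positions @ directions.T / C_AIR              # (n_mics, n_dirs)
    shifts = xp.rint(delays * fs).astype(int)
    shifts -= shifts.min()                                      # non-negative shifts

    out_len = signals.shape[1] - int(shifts.max())
    energy = xp.zeros(directions.shape[0])
    for d in range(directions.shape[0]):
        beam = xp.zeros(out_len)
        for m in range(signals.shape[0]):
            s = int(shifts[m, d])
            beam += signals[m, s:s + out_len]                   # align and sum
        energy[d] = xp.sum(beam ** 2)
    return energy
```

Processing several sensors then amounts to calling this routine once per sensor frame, which is where batching on the GPU pays off.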
Systems for estimating six-degrees-of-freedom human body poses have existed for more than two decades. Techniques such as motion-capture cameras, advanced gaming peripherals and, more recently, deep learning methods and virtual reality systems have shown impressive results. However, most systems that offer high accuracy and high precision are expensive and not easy to operate. Recently, research has been conducted into estimating human body poses using the HTC Vive virtual reality system. This system shows accurate results while keeping the cost below 1000 USD. The system uses an optical approach: two transmitter devices emit infrared pulses and laser planes that are tracked by photodiodes on receiver hardware. Systems that use these transmitter devices in combination with low-cost custom receiver hardware have been developed before, but they require manual measurement of the position and orientation of the transmitter devices. These manual measurements can be time-consuming, error-prone and impossible in certain setups. We propose an algorithm to automatically calibrate the pose of the transmitter devices in any chosen environment using custom receiver/calibration hardware. The results show that the calibration works in a variety of setups while being more accurate than manual measurements allow. Furthermore, the motion and speed during calibration have no noticeable influence on the precision of the results.
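A minimal sketch of how such a calibration could be posed, assuming the sweep angles to photodiodes at known rig positions have already been extracted from the timing data; the angle convention, frames and use of scipy are illustrative assumptions rather than the paper's exact algorithm.

```python
# Illustrative sketch: angle convention, frames and initial guess are assumptions,
# not the exact calibration algorithm of the paper.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def predicted_angles(pose, diode_positions):
    """Azimuth/elevation of each photodiode as seen from the transmitter."""
    rotvec, t = pose[:3], pose[3:]
    R = Rotation.from_rotvec(rotvec).as_matrix()
    # Transform rig-frame diode positions into the transmitter frame.
    p = (diode_positions - t) @ R          # each row equals R.T @ (x - t)
    az = np.arctan2(p[:, 0], p[:, 2])      # horizontal sweep angle
    el = np.arctan2(p[:, 1], p[:, 2])      # vertical sweep angle
    return np.column_stack([az, el])


def calibrate_transmitter(measured_angles, diode_positions):
    """Least-squares fit of the 6-DoF transmitter pose (rotation vector + translation)."""
    def residuals(pose):
        return (predicted_angles(pose, diode_positions) - measured_angles).ravel()

    x0 = np.zeros(6)
    x0[5] = -2.0                           # rough seed: transmitter ~2 m behind the rig
    return least_squares(residuals, x0).x
```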
Navigating diverse and dynamic indoor environments remains a complex task for autonomous mobile platforms, especially when conditions deteriorate and typical sensor modalities may not operate optimally, subsequently providing inadequate input for safe navigation control. In this study, we present an approach for navigating dynamic indoor environments with a mobile platform carrying a single or several sonar sensors, using a layered control system. These sensors can operate in conditions such as rain, fog, dust or dirt. The different control layers, such as collision avoidance and corridor-following behavior, are activated based on acoustic flow cues extracted from the fusion of the sonar images. The novelty of this work is that the sensors can be placed freely on the mobile platform, and a framework is provided to design for the optimal navigation result based on a zoning system around the mobile platform. This paper presents the acoustic flow model used as well as the design of the layered controller. Besides validation in simulation, an implementation is validated in real time with 2D navigation on a real mobile platform carrying one, two or three sonar sensors in a real office environment. Multiple sensor layouts were validated in both simulation and real experiments to demonstrate that the modular approach to the controller and sensor fusion works as intended. The results of this work show stable and safe navigation of indoor environments with dynamic objects.
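The sketch below illustrates the layered (subsumption-style) control idea in simplified form: a higher-priority collision-avoidance layer overrides a corridor-following layer based on per-zone flow cues. The zone names, thresholds and velocity commands are hypothetical, and the acoustic-flow extraction itself is not shown.

```python
# Illustrative sketch: zones, thresholds and velocity commands are assumptions,
# not the paper's controller parameters.
from dataclasses import dataclass


@dataclass
class FlowCues:
    """Fused acoustic-flow cues aggregated per zone around the platform."""
    front_intensity: float     # echo energy approaching in the front zone
    left_flow: float           # lateral flow magnitude on the left
    right_flow: float          # lateral flow magnitude on the right


def collision_avoidance(cues, threshold=0.8):
    """Highest-priority layer: stop and turn away when the front zone fills up."""
    if cues.front_intensity > threshold:
        turn = -0.6 if cues.left_flow < cues.right_flow else 0.6
        return 0.0, turn                      # (linear, angular) command
    return None                               # layer stays inactive


def corridor_following(cues, gain=0.5):
    """Lower-priority layer: balance left/right flow to stay centered."""
    return 0.4, gain * (cues.right_flow - cues.left_flow)


def layered_control(cues):
    """Run layers from highest to lowest priority; the first active layer wins."""
    for layer in (collision_avoidance, corridor_following):
        command = layer(cues)
        if command is not None:
            return command
    return 0.0, 0.0                            # safe default: stand still
```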
Navigating spatial and dynamic environments is one of the key tasks for autonomous agents. In this paper, we present a novel approach for navigating a mobile platform with one or more 3D sonar sensors. A moving mobile platform, and any 3D sonar sensor mounted on it, produces signature variations over time in the echo reflections of the sensor readings. A method is presented to create a predictive model of these signature variations for any motion type. Furthermore, the model is adaptive and can be used for any position and orientation of one or more sonar sensors on the mobile platform. We propose to use this adaptive model and to fuse all sensory readings into a layered control system that allows a mobile platform to perform a set of primitive motions, such as collision avoidance, obstacle avoidance, wall-following and corridor-following behaviors, in order to navigate environments with dynamically moving objects within them. This paper describes the underlying theoretical foundation of the full navigation model and validates it in a simulated environment, with results showing that the system is stable and provides the expected behavior for multiple tested spatial configurations of one or more sonar sensors, completing autonomous navigation tasks.
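As a simplified illustration of such a predictive signature model, the sketch below computes the expected range rate of static reflectors for a sonar sensor mounted at an arbitrary position and orientation on a platform with a given body twist. The frame conventions and array shapes are assumptions, not the paper's exact model.

```python
# Illustrative sketch: frame conventions and shapes are assumptions,
# not the paper's exact adaptive signature model.
import numpy as np


def predicted_range_rates(reflectors_s, v_body, w_body, R_sensor, p_sensor):
    """Expected d(range)/dt for each static reflector, as seen by the sensor.

    reflectors_s: (n, 3) reflector positions in the sensor frame [m]
    v_body:       (3,) platform linear velocity in the body frame [m/s]
    w_body:       (3,) platform angular velocity in the body frame [rad/s]
    R_sensor:     (3, 3) rotation of the sensor frame w.r.t. the body frame
    p_sensor:     (3,) sensor mounting position in the body frame [m]
    """
    # Velocity of the sensor origin, expressed in the body frame.
    v_sensor_body = v_body + np.cross(w_body, p_sensor)
    # Express sensor velocity and angular rate in the sensor frame.
    v_s = R_sensor.T @ v_sensor_body
    w_s = R_sensor.T @ w_body
    # Apparent velocity of each static reflector relative to the moving sensor
    # (the w_s x r term only affects the bearing flow, not the range rate).
    apparent = -(v_s + np.cross(w_s, reflectors_s))            # (n, 3)
    ranges = np.linalg.norm(reflectors_s, axis=1, keepdims=True)
    directions = reflectors_s / ranges
    # Radial component: positive values mean the echo is receding.
    return np.sum(directions * apparent, axis=1)
```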
Designing and validating sensor applications and algorithms in simulation is an important step in the modern development process. Furthermore, modern open-source multi-sensor simulation frameworks are moving towards the use of video game engines such as the Unreal Engine. In such real-time software, simulating sensors like LiDAR can be difficult. In this paper, we present a GPU-accelerated simulation of LiDAR based on its physical properties and its interaction with the environment. We provide the generation of depth and intensity data based on the properties of the sensor as well as the surface material and the incidence angle at which the beams hit a surface. The simulation is validated against a real LiDAR sensor and proves to be accurate and precise, although it is highly dependent on the spectral data used for the material properties.
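A minimal sketch of how per-beam depth and intensity could be derived from a hit distance, a surface normal and a material reflectivity looked up at the sensor wavelength. The reflectivity table and the simple Lambertian falloff with inverse-square range are illustrative assumptions, not the exact model validated in the paper.

```python
# Illustrative sketch: reflectivity values and the Lambertian model are assumptions,
# not the paper's validated material and sensor model.
import numpy as np

# Hypothetical diffuse reflectivities at a 905 nm LiDAR wavelength.
REFLECTIVITY_905NM = {"asphalt": 0.17, "concrete": 0.35, "white_paint": 0.85}


def simulate_return(ray_dir, hit_distance, surface_normal, material,
                    emitted_power=1.0, noise_std=0.01):
    """Depth and intensity for one beam hitting a surface."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    surface_normal = surface_normal / np.linalg.norm(surface_normal)

    # Incidence angle between the reversed beam direction and the surface normal.
    cos_incidence = max(float(np.dot(-ray_dir, surface_normal)), 0.0)
    rho = REFLECTIVITY_905NM[material]

    # Lambertian reflection with inverse-square range falloff.
    intensity = emitted_power * rho * cos_incidence / hit_distance ** 2
    depth = hit_distance + np.random.normal(0.0, noise_std)    # range noise
    return depth, intensity
```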
Modeling lies at the core of both the financial and the insurance industry for a wide variety of tasks. The rise and development of machine learning and deep learning models have created many opportunities to improve our modeling toolbox. Breakthroughs in these fields often come with the requirement of large amounts of data. Such large datasets are often not publicly available in finance and insurance, mainly due to privacy and ethics concerns. This lack of data is currently one of the main hurdles in developing better models. One possible option for alleviating this issue is generative modeling. Generative models are capable of simulating fake but realistic-looking data, also referred to as synthetic data, that can be shared more freely. Generative Adversarial Networks (GANs) are one such class of models, which increases our capacity to fit very high-dimensional distributions of data. While research on GANs is an active topic in fields like computer vision, they have found limited adoption within the human sciences, such as economics and insurance. The reason for this is that in these fields most questions are inherently about the identification of causal effects, while to this day neural networks, which are at the center of the GAN framework, focus mostly on high-dimensional correlations. In this paper we study the causal preservation capabilities of GANs and whether the produced synthetic data can reliably be used to answer causal questions. This is done by performing causal analyses on the synthetic data, produced by a GAN, under increasingly more lenient assumptions. We consider the cross-sectional case, the time-series case and the case with a complete structural model. It is shown that in the simple cross-sectional scenario where correlation equals causation the GAN preserves causality, but that challenges arise for more advanced analyses.
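A minimal sketch of the kind of cross-sectional check described above: estimate the same regression-based effect on real data and on synthetic data assumed to come from an already trained GAN, and compare the two. The column names and the use of statsmodels are illustrative assumptions.

```python
# Illustrative sketch: column names are assumptions and the GAN that produced
# `synthetic_df` is assumed to be already trained elsewhere.
import pandas as pd
import statsmodels.api as sm


def treatment_effect(df, outcome="y", treatment="t", confounders=("x1", "x2")):
    """OLS coefficient of the treatment, adjusting for observed confounders."""
    X = sm.add_constant(df[[treatment, *confounders]])
    model = sm.OLS(df[outcome], X).fit()
    return model.params[treatment], model.bse[treatment]


def compare_causal_preservation(real_df: pd.DataFrame, synthetic_df: pd.DataFrame):
    """Report the estimated effect on both datasets side by side."""
    real_est, real_se = treatment_effect(real_df)
    synth_est, synth_se = treatment_effect(synthetic_df)
    print(f"real:      {real_est:.3f} +/- {real_se:.3f}")
    print(f"synthetic: {synth_est:.3f} +/- {synth_se:.3f}")
    return real_est, synth_est
```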
Deep learning models are known to put the privacy of their training data at risk, which poses challenges for their safe and ethical release to the public. Differentially private stochastic gradient descent is the de facto standard for training neural networks without leaking sensitive information about the training data. However, applying it to models for graph-structured data poses a novel challenge: unlike with i.i.d. data, sensitive information about a node in a graph can leak not only through its own gradients, but also through the gradients of all nodes within a larger neighborhood. In practice, this limits privacy-preserving deep learning on graphs to very shallow graph neural networks. We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph. We develop three random-walk-based methods for generating such disjoint subgraphs and perform a careful analysis of the data-generating distributions to provide strong privacy guarantees. Through extensive experiments, we show that our method greatly outperforms the state-of-the-art baseline on three large graphs, and matches or outperforms it on four smaller ones.
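A minimal sketch of one way to carve a training graph into node-disjoint, random-walk-induced subgraphs; the walk length, sampling order and use of networkx are illustrative assumptions and do not reproduce the paper's three samplers or their privacy accounting.

```python
# Illustrative sketch: walk length and sampling order are assumptions, not the
# paper's samplers; no differential-privacy accounting is performed here.
import random
import networkx as nx


def disjoint_random_walk_subgraphs(graph: nx.Graph, walk_length: int = 8, seed: int = 0):
    """Partition the nodes into node-disjoint, walk-induced subgraphs."""
    rng = random.Random(seed)
    unused = set(graph.nodes)
    subgraphs = []

    while unused:
        start = rng.choice(tuple(unused))
        walk = [start]
        unused.discard(start)

        current = start
        for _ in range(walk_length - 1):
            # Only step to neighbors that no other subgraph has claimed yet.
            candidates = [n for n in graph.neighbors(current) if n in unused]
            if not candidates:
                break
            current = rng.choice(candidates)
            walk.append(current)
            unused.discard(current)

        # Induced subgraph on the walk's nodes; disjoint from all previous ones.
        subgraphs.append(graph.subgraph(walk).copy())

    return subgraphs
```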
Data-driven models such as neural networks are being applied more and more to safety-critical applications, such as the modeling and control of cyber-physical systems. Despite the flexibility of the approach, there are still concerns about the safety of these models in this context, as well as the need for large amounts of potentially expensive data. In particular, when long-term predictions are needed or frequent measurements are not available, the open-loop stability of the model becomes important. However, it is difficult to make such guarantees for complex black-box models such as neural networks, and prior work has shown that model stability is indeed an issue. In this work, we consider an aluminum extraction process where measurements of the internal state of the reactor are time-consuming and expensive. We model the process using neural networks and investigate the role of including skip connections in the network architecture as well as using l1 regularization to induce sparse connection weights. We demonstrate that these measures can greatly improve both the accuracy and the stability of the models for datasets of varying sizes.
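A minimal sketch of the two ingredients discussed: a feed-forward model with skip connections from the input to every layer, combined with an l1 penalty on the weights to induce sparse connections. The layer sizes and penalty weight are assumptions, not the paper's exact architecture or training setup.

```python
# Illustrative sketch: layer sizes and the penalty weight are assumptions,
# not the architecture or hyperparameters used in the paper.
import torch
import torch.nn as nn


class SkipMLP(nn.Module):
    """Feed-forward model whose hidden layers also see the raw input (skip connections)."""

    def __init__(self, n_inputs, n_outputs, hidden=32, depth=3):
        super().__init__()
        self.layers = nn.ModuleList()
        width = n_inputs
        for _ in range(depth):
            # Each hidden layer receives the previous features plus the raw input.
            self.layers.append(nn.Linear(width + n_inputs, hidden))
            width = hidden
        self.head = nn.Linear(width + n_inputs, n_outputs)

    def forward(self, x):
        h = x
        for layer in self.layers:
            h = torch.tanh(layer(torch.cat([h, x], dim=-1)))
        return self.head(torch.cat([h, x], dim=-1))


def l1_penalty(model, weight=1e-4):
    """Sparsity-inducing regularizer added to the data-fit loss."""
    return weight * sum(p.abs().sum() for p in model.parameters())


# Hypothetical training step on a batch (X, y):
# loss = nn.functional.mse_loss(model(X), y) + l1_penalty(model)
```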
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with those annotations. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
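A minimal sketch of the quantitative idea, assuming binary segmentation masks and the Dice score: approximate the agreement ceiling from pairwise inter-rater similarity and compare the model-reference score against it. The exact PGT estimator in the paper may differ.

```python
# Illustrative sketch: binary masks and the Dice score are assumptions; the
# paper's exact PGT approximation may differ.
import itertools
import numpy as np


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + 1e-9)


def approximate_pgt(annotations):
    """Mean pairwise inter-rater Dice as a proxy for the agreement ceiling."""
    pairs = itertools.combinations(annotations, 2)
    return float(np.mean([dice(a, b) for a, b in pairs]))


def pgt_report(model_mask, reference_mask, all_annotations):
    """Flag scores above the ceiling, where further similarity gains may no
    longer translate into better real-world performance."""
    ceiling = approximate_pgt(all_annotations)
    score = dice(model_mask, reference_mask)
    status = "above" if score > ceiling else "below"
    print(f"model-reference Dice {score:.3f} is {status} the PGT proxy {ceiling:.3f}")
```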
Explainable AI transforms opaque decision strategies of ML models into explanations that are interpretable by the user, for example, identifying the contribution of each input feature to the prediction at hand. Such explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by finding relevant subspaces in activation space that can be mapped to more abstract human-understandable concepts and enable a joint attribution on concepts and input features. To automatically extract the desired representation, we propose new subspace analysis formulations that extend the principle of PCA and subspace analysis to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), optimize relevance of projected activations rather than the more traditional variance or kurtosis. This enables a much stronger focus on subspaces that are truly relevant for the prediction and the explanation, in particular, ignoring activations or concepts to which the prediction model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. Our proposed methods prove to be practically useful and compare favorably to the state of the art, as demonstrated on benchmarks and three use cases.
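A simplified sketch of the flavor of analysis described: choose directions that maximize a relevance-style bilinear score of projected activations rather than their variance. The decomposition below is an illustrative assumption and not necessarily the exact PRCA/DRSA objective.

```python
# Illustrative sketch: a simplified relevance-maximizing subspace, not
# necessarily the paper's exact PRCA/DRSA formulation.
import numpy as np


def relevant_subspace(activations, contexts, k=2):
    """Top-k directions u maximizing sum_n (u^T a_n)(u^T c_n).

    activations: (n_samples, d) layer activations a_n
    contexts:    (n_samples, d) vectors c_n such that a_n * c_n gives relevance
    """
    # Symmetrized cross-covariance between activations and contexts.
    M = activations.T @ contexts
    M = 0.5 * (M + M.T)
    # Eigenvectors with the largest eigenvalues span the most relevant subspace.
    eigvals, eigvecs = np.linalg.eigh(M)
    order = np.argsort(eigvals)[::-1][:k]
    return eigvecs[:, order]                     # (d, k) projection matrix


def project_relevance(activations, contexts, U):
    """Relevance attributed to the subspace spanned by the columns of U."""
    return np.sum((activations @ U) * (contexts @ U), axis=1)
```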